A Knowledge Based Tool for Checking Large Knowledge Bases
Authors
Abstract
We present a tool for detecting conflicts and redundancies in large knowledge bases. First, it reduces the combinatorial explosion by selecting, among all the rules, only those liable to generate conflicts; it does this using properties of the rules together with heuristics. Second, attributes linking chunks of knowledge constitute access paths that shorten the search for a particular piece of information. These attributes also keep track of the results of previously performed treatments that will be useful in later processing, so that treatments are not needlessly repeated. Finally, since the knowledge used to perform these treatments and checks is itself expressed in the inference-rule formalism, the system may be applied to itself.

Introduction

In response to the increasing importance of knowledge-based systems, many methodologies and tools have been developed to assist in building them. Because such systems aim at modelling experts' knowledge and reasoning, the knowledge they contain is heuristic and not well defined, so traditional methods for validating software no longer apply. The primary objective is to obtain a system that is, on the one hand, valid, in that it solves problems correctly the way a human expert would, and, on the other hand, complete, in that it resolves all the problems its users may face. These are the objectives of the verification and validation tasks. Verification consists of checking that a system is constructed correctly according to its specifications: "build the system right". It tries to detect programming errors; such errors are tightly related to the knowledge representation formalism and, for rule-based systems, to the implicit control exercised by the inference mechanism. Validation [Laurent 92] consists of verifying that the constructed system satisfies the needs of its users: "build the right system".
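As a rough illustration of the filtering idea described in the abstract, two rules can be flagged as "liable to conflict" when their conditions overlap (so both can fire in the same situation) while their conclusions contradict each other. The rule encoding, the helper names, and the negation convention below are our own simplifications for illustration, not the paper's actual data structures:

```python
# Minimal sketch of heuristic conflict filtering (illustrative only; the
# rule encoding and the "not X" negation convention are assumptions).
# A rule is a pair (conditions, conclusion).

def negates(a, b):
    """Assumed convention: the string 'not X' is the negation of 'X'."""
    return a == "not " + b or b == "not " + a

def liable_to_conflict(r1, r2):
    cond1, concl1 = r1
    cond2, concl2 = r2
    # Heuristic filter: only compare rules whose condition sets overlap,
    # so most rule pairs are discarded before any deeper check.
    if not (set(cond1) & set(cond2)):
        return False
    return negates(concl1, concl2)

rules = [
    (["bird", "adult"], "flies"),
    (["bird", "injured"], "not flies"),
    (["fish"], "swims"),
]

pairs = [(i, j)
         for i in range(len(rules))
         for j in range(i + 1, len(rules))
         if liable_to_conflict(rules[i], rules[j])]
print(pairs)  # [(0, 1)]
```

Only the pair (0, 1) survives the filter: the two bird rules share a condition and reach contradictory conclusions, while the fish rule is eliminated without any conclusion comparison.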
For systems that represent knowledge with inference rules, verification is a combinatorial problem: a systematic, deep search for anomalies is difficult, and becomes impossible as the base grows in size. It therefore becomes necessary to have ways of reducing the combinatorics, and it is in this context that the present work was done. After a brief survey of the best-known tools and methodologies for validating knowledge bases, we describe the environment in which the system has been developed; this makes explicit the hypotheses underlying this work. We then define the concepts used, state the problem, and give a more detailed description of the treatments applied to rules. A comparison of our system with others follows, before we conclude with a discussion of the advantages and disadvantages of our approach.

1. State of the art

Many tools have been developed to address the verification of knowledge-based systems [Lopez et al 90] [Coenen and Bench-Capon 93] [O'Keefe and O'Leary 93] [Ayel and Rousset 90] [Gupta 91]. They can be divided into two categories [O'Keefe and O'Leary 93].

1.1 Domain-independent tools

Domain-independent tools try to detect anomalies that amount to an abuse or unusual use of the knowledge representation scheme. Some are based on decision-table methods: ESC [Cragun and Steudel 87], RCP [Suwa et al 82], CHECK [Nguyen et al 87]. These methods separate the rules' condition and action parameters: conditions are laid out along the X-axis and actions along the Y-axis, and algorithms then examine the relationships among rows and columns. The drawback of this approach is that it is useful only for small rule bases, because the table grows to unmanageable proportions. COVER [Preece and Shinghal 92] constructs a graph representation of the rule base.
This allows it to detect anomalies spanning numerous rules, rather than only between pairs of rules as is common with the table-based approaches. KB-Reducer [Ginsberg 88] checks for all potential inconsistencies and redundancies in rule bases: it transforms the rules into a logical form, then computes for each hypothesis its label, following ATMS terminology [De Kleer 86], and anomalies are detected during this labelling process. Other systems, such as INDE [Pipard 88] and that of [Agarwal and Tanniru 91], use Petri nets to represent the rule base, which can then be tested with existing methods to detect inconsistencies, incompleteness, and non-fireable rules.

1.2 Domain-dependent tools

Domain-dependent tools use meta-knowledge from the domain to verify the validity of the knowledge. One of the best-known examples in this category is EVA [Chang et al 90], an integrated set of generic tools that enable the user to check the validity of the knowledge base; it also incorporates an extended structure checker that tests for synonymy between terms and for inheritance information, using meta-facts. SACCO [Ayel 87] defines the coherence model as the set of all the properties and semantic constraints. The verification of dynamic coherence is not systematic: the system gives the expert the means to limit it to parts of the coherence model, using meta-information such as the constraints' relevance coefficients or the concepts concerned. In TIBRE [Lalo 89], dynamic checks are entirely guided by the expert, who specifies the coherence constraints he wants verified; a solver, implemented as a Prolog program, checks for each constraint whether it can be obtained from a coherent fact base.

2. The development environment

Our tool is developed in the Gosseyn Machine environment [Fouet 87]. The Gosseyn Machine enables the expert to create his knowledge about a domain. It is a non-monotonic system based upon first-order logic.
It uses object and rule formalisms to represent knowledge. The tool is a component of a larger system whose function is to assist the expert throughout the construction of the knowledge base. Each newly created piece of knowledge is studied and confronted with other chunks of knowledge in the base, in search of anomalies or deficiencies. These checks are of two types: local checks, which verify that the description of the new knowledge satisfies the structural and semantic constraints, and global checks, which make sure that its integration into the current content of the base will not generate new anomalies or imperfections. The results of these checks consist of a set of messages sent to the user, indicating the chunks of knowledge that must be corrected, completed, or even created. Once the new knowledge is judged valid, it is integrated into the knowledge base, and the system then updates a number of meta-attributes to take account of it.
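The two-level check described above can be sketched as follows. This is a hypothetical outline of the local/global split, with invented function names, message strings, and rule encoding; it is not the actual Gosseyn Machine mechanism:

```python
# Hypothetical sketch of local and global checks on a newly created rule
# (all names and the rule encoding are invented for illustration).

def local_check(rule, schema):
    """Local check: the new rule's description must satisfy the
    structural/semantic constraints, here reduced to attribute lookup."""
    return [f"unknown attribute: {attr}"
            for attr in rule["conditions"] if attr not in schema]

def global_check(rule, base):
    """Global check: integrating the rule must not create anomalies,
    here reduced to detecting a direct conflict with an existing rule."""
    return [f"conflicts with {name}"
            for name, other in base.items()
            if other["conditions"] == rule["conditions"]
            and other["conclusion"] != rule["conclusion"]]

schema = {"temperature", "pressure"}
base = {"R1": {"conditions": ["temperature"], "conclusion": "alarm"}}
new = {"conditions": ["temperature"], "conclusion": "shutdown"}

messages = local_check(new, schema) + global_check(new, base)
print(messages)  # ['conflicts with R1']
if not messages:
    # Only a valid rule is integrated; the real system would also
    # update its meta-attributes at this point.
    base["R-new"] = new
```

Here the new rule passes the local check but fails the global one, so the user receives a message and the rule is not integrated.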
Similar resources
A Case-Based Collaborative Knowledge Acquisition Tool
We describe a fault-tolerant client-server application that supports collaboration between knowledge engineers developing case-based expert systems. The server holds knowledge bases and performs case acquisition and verification, case maintenance, case reasoning, multimedia acquisition, and security checking. The clients are a number of specialized components which have the following functio...
Towards ABox Modularization of semi-expressive Description Logics
In recent years, the vision of the Semantic Web has fostered interest in reasoning over large and very large sets of assertional statements in knowledge bases. Traditional tableau-based reasoning systems perform badly when answering queries over large data sets, because these reasoning systems rely on main-memory data structures. Increasing expressivity and worst-case complexity...
Verification of Knowledge Bases Based on Containment Checking (Marie-Christine Rousset)
Building complex knowledge-based applications requires encoding large amounts of domain knowledge. After acquiring knowledge from domain experts, much of the effort in building a knowledge base goes into verifying that the knowledge is encoded correctly. We consider the problem of verifying hybrid knowledge bases that contain both Horn rules and a terminology in a description logic. Our approach...
Knowledge-Base Reduction: A New Approach to Checking Knowledge Bases for Inconsistency and Redundancy
This paper presents a new approach, called knowledge-base reduction, to the problem of checking knowledge bases for inconsistency and redundancy. The algorithm presented here makes use of concepts and techniques that have recently been advocated by de Kleer [deKleer, 1986] in conjunction with an assumption-based truth maintenance system. Knowledge-base reduction is more comprehensive than previ...
A Knowledge Conversion Tool for Expert Systems
Most expert systems use text-oriented knowledge bases. However, knowledge management using such knowledge bases is considered a huge burden on knowledge workers because it includes some troublesome work: chasing and/or checking activities on the consistency, redundancy, circulation, and refinement of the knowledge. In those cases, we consider that they could reduce the bur...
CRIB: A Method for Integrity Constraint Checking on Knowledge Bases
The necessity of verification tools for knowledge-based systems (KBSs), which help to guarantee a certain degree of quality and reliability of these systems, will increase in the future as more critical systems are developed in areas such as industry, science, and business. One of the objectives of KBS verification is to assure the consistency and completeness of the knowledge base (KB). ...
متن کاملذخیره در منابع من
با ذخیره ی این منبع در منابع من، دسترسی به آن را برای استفاده های بعدی آسان تر کنید
Journal:
Volume / issue:
Pages: -
Publication date: 1996